[Bugfix] fix kernel error for qwen3-omni#1602

Merged
hsliuustc0106 merged 1 commit intovllm-project:mainfrom
R2-Y:benchmark_bugfix
Mar 2, 2026
Conversation


@R2-Y R2-Y commented Mar 2, 2026

Purpose

When vLLM groups requests into a batch, it builds sampling metadata in which prompt_token_ids is a tensor of shape [num_reqs, max_prompt_len]. Requests shorter than max_prompt_len are padded up to the batch's maximum prompt length, using the model's vocab_size as the padding value. In a multi-stage model, each stage has its own vocab_size; in Qwen3-Omni the talker incorrectly used the thinker's vocab_size during the sampling phase, producing an out-of-bounds index. This PR clamps the padding value of prompt_token_ids to the vocab size of the current stage.

This can help solve #1520 & #1532
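The padding behavior described above can be sketched as follows. This is a minimal illustration, not the actual vLLM implementation; the function name `build_prompt_token_ids` and its signature are hypothetical, and the real code operates on tensors rather than lists:

```python
def build_prompt_token_ids(prompts, stage_vocab_size):
    """Pad variable-length prompts into a [num_reqs, max_prompt_len] grid.

    vLLM uses the model's vocab_size as the padding sentinel. The point of
    this PR's fix is that, for a multi-stage model like Qwen3-Omni, the
    sentinel must come from the *current* stage's vocab_size (talker vs.
    thinker); padding with another stage's larger vocab_size yields token
    ids that index out of bounds in this stage's kernels.
    """
    max_prompt_len = max(len(p) for p in prompts)
    pad_value = stage_vocab_size  # clamped to this stage's vocabulary
    return [p + [pad_value] * (max_prompt_len - len(p)) for p in prompts]


# Two requests of different lengths, padded against a stage whose vocab is 3000;
# every id in the result stays within what that stage's kernels can address.
batch = build_prompt_token_ids([[10, 20, 30], [40]], 3000)
```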

Test Plan

bash benchmarks/qwen3-omni/vllm_omni/eval_qwen3_moe_omni.sh

Test Result

(screenshots of benchmark results omitted)
Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan. Please provide the test scripts and commands, or state why your change doesn't require additional tests. For test file guidelines, please check the test style doc
  • The test results. Please paste the results comparison before and after, or the e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Please run mkdocs serve to sync the documentation editions to ./docs.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft.

@R2-Y R2-Y requested a review from hsliuustc0106 as a code owner March 2, 2026 08:39

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: e71b4753fe



R2-Y commented Mar 2, 2026

@amy-why-3459 @hsliuustc0106 PTAL

@R2-Y R2-Y force-pushed the benchmark_bugfix branch from e71b475 to 1dff83e Compare March 2, 2026 09:02
@R2-Y R2-Y changed the title fix kernel error for qwen3-omni [Bugfix] fix kernel error for qwen3-omni Mar 2, 2026

@hsliuustc0106 hsliuustc0106 left a comment


Is this the root cause of the problem, or just a workaround?

runtime:
  devices: "1"
-  max_batch_size: 64
+  max_batch_size: 32

Why do we need to change the config?


We only use two cards now; a batch size of 64 for code2wav will OOM during Qwen3-Omni's convolution computation.

Signed-off-by: Rein Yang <ruiruyang2@gmail.com>
@R2-Y R2-Y force-pushed the benchmark_bugfix branch from 1dff83e to 9098756 Compare March 2, 2026 09:06

R2-Y commented Mar 2, 2026

is this the root cause of the problem or just a workaround?

root cause

Labels

ready label to trigger buildkite CI
